06. Output .yaml Files
For a passing submission of this project, all you need to do is implement your perception pipeline in a new environment and correctly identify the majority of objects in three different scenarios. In each scenario, you'll be faced with the same robot environment and a different collection of objects on the table. In each case you'll be provided with a "pick list" of objects that you're looking for.
These pick lists exist as .yaml files under /pr2_robot/config and are loaded onto the ROS parameter server with the following line in your project launch file:
<rosparam command="load" file="$(find pr2_robot)/config/{PICK_LIST_NAME}.yaml"/>
If you open up the project launch file called pick_place_project.launch in pr2_robot/launch, you'll find the default pick list is set to pick_list_1.yaml, which looks like this:
object_list:
  - name: biscuits
    group: green
  - name: soap
    group: green
  - name: soap2
    group: red
This list dictates the order in which objects are to be collected, along with each object's group, which determines which dropbox the object goes into.
You can retrieve the list from the parameter server and parse it in the following manner. Since the header of the pick list file is object_list, that is the parameter name under which it is loaded. To read this parameter, you can use:
# get parameters
object_list_param = rospy.get_param('/object_list')
The object_list_param now holds the pick list and can be parsed to obtain object names and associated groups.
object_name = object_list_param[i]['name']
object_group = object_list_param[i]['group']
where i is the index used to traverse the list.
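For instance, a quick (purely illustrative) check that the list was parsed correctly could loop over every entry and print its name and group:

# Illustrative only: print each pick-list entry's name and group
for i in range(len(object_list_param)):
    object_name = object_list_param[i]['name']
    object_group = object_list_param[i]['group']
    print("{}: {}".format(object_name, object_group))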
For each item in the list you must identify the associated object in the scene and calculate its centroid. To calculate an object's centroid (the average position of all the points in the object cloud) you can use a handy pcl method to convert the cloud to an array. First though, recall that in your perception pipeline you created a list of detected objects (of message type DetectedObject) like this:
# Add the detected object to the list of detected objects.
do = DetectedObject()
do.label = label
do.cloud = ros_cluster
detected_objects.append(do)
For each item in the list, you'll need to compare the label with the pick list and provide the centroid. You can grab the labels, access the (x, y, z) coordinates of each point and compute the centroid like this:
labels = []
centroids = [] # to be list of tuples (x, y, z)
for object in objects:
    labels.append(object.label)
    points_arr = ros_to_pcl(object.cloud).to_array()
    centroids.append(np.mean(points_arr, axis=0)[:3])
WARNING: ROS messages expect native Python data types, but having computed centroids as above, the entries in your centroids list will be of type numpy.float64. To recast them to the native Python float type you can use np.asscalar(), for example:
>>> import numpy as np
>>> x=np.mean([1.27,2.2,3.99])
>>> print(x)
2.48666666667
>>> type(x)
<class 'numpy.float64'>
>>> type(np.asscalar(x))
<class 'float'>
Now that you have the labels and centroids for target objects in the pick list, you can create the necessary ROS messages to send to the pick_place_routine service.
Note: creating these messages successfully and outputting them to .yaml file format is all you are required to do to complete this project. After that, if you're interested in a challenge, you can work on sending these messages and completing the pick and place operations.
The pick and place operation is implemented as a request-response system, where you must write a ros_client to extend a request to the pr2_pick_place_server. Have a look at PickPlace.srv in pr2_robot/srv. This file defines the format of the service message:
# request
std_msgs/Int32 test_scene_num
std_msgs/String object_name
std_msgs/String arm_name
geometry_msgs/Pose pick_pose
geometry_msgs/Pose place_pose
---
# response
bool success
The request your ros_client sends to the pr2_pick_place_server must adhere to the above format and contain:
| Name | Message Type | Description | Valid Values |
|---|---|---|---|
| test_scene_num | std_msgs/Int32 | The test scene you are working with | 1, 2, 3 |
| object_name | std_msgs/String | Name of the object, obtained from the pick list | - |
| arm_name | std_msgs/String | Name of the arm | right, left |
| pick_pose | geometry_msgs/Pose | Calculated Pose of the recognized object's centroid | - |
| place_pose | geometry_msgs/Pose | Object placement Pose | - |
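For the optional pick and place challenge, a minimal client sketch might look like the following, once you've populated the request fields described below (this assumes the service is exposed under the name pick_place_routine, as mentioned above, and that the PickPlace service type is importable from pr2_robot.srv):

import rospy
from pr2_robot.srv import PickPlace

# Sketch only: wait for the service, then send one pick and place request
rospy.wait_for_service('pick_place_routine')
try:
    pick_place_routine = rospy.ServiceProxy('pick_place_routine', PickPlace)
    resp = pick_place_routine(test_scene_num, object_name, arm_name, pick_pose, place_pose)
    print("Response: ", resp.success)
except rospy.ServiceException as e:
    print("Service call failed: {}".format(e))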
You can obtain the test_scene_num by referring to the world file name loaded in your pick_place_project.launch file. The line in your launch file looks like this:
<arg name="world_name" value="$(find pr2_robot)/worlds/test1.world"/>
This indicates that in this case the test_scene_num value is 1. But be careful: the message type you need to send for this variable is not a simple integer but a ROS message of type std_msgs/Int32.
To get more information on this message type, type the following command in a terminal:
$ rosmsg info std_msgs/Int32
and you'll get:
int32 data
which means that this message type simply contains a data field of type int32. You therefore cannot directly assign a number to a std_msgs/Int32; instead, you must first import the message type into your node:
from std_msgs.msg import Int32
Next, initialize the test_scene_num variable:
test_scene_num = Int32()
And then populate its data field:
test_scene_num.data = 1
Next, the object_name is obtained directly by reading it off the pick list. But again, this is not a simple string but a ROS message of type std_msgs/String. Just like before, you can investigate the contents of this message type with:
$ rosmsg info std_msgs/String
And find that it contains a single data element:
string data
So in the script for your ROS node, import the String message type and populate it with the appropriate value:
# import the message type
from std_msgs.msg import String
# Initialize a variable
object_name = String()
# Populate the data field
object_name.data = object_list_param[i]['name']
Based on the group associated with each object (which you extracted from the pick list .yaml file), decide which arm of the robot should be used. Since the green box is located on the right side of the robot, select the right arm for objects in the green group and the left arm for objects in the red group. Like you did with object_name, initialize the appropriate message type and populate it with the name of the arm.
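For instance, a minimal sketch of selecting the arm (assuming the object_group variable parsed from the pick list earlier, and the String type already imported) could be:

# Sketch only: green group goes to the right box, red group to the left box
arm_name = String()
arm_name.data = 'right' if object_group == 'green' else 'left'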
Previously you wrote code to calculate the centroid of an identified object; this centroid will now be passed as the pick_pose variable. Since the message type for pick_pose is geometry_msgs/Pose, just like with the previous ROS message types, investigate its contents, import the message type, initialize it, and then populate the fields with the appropriate data.
$ rosmsg info geometry_msgs/Pose
# import message type
from geometry_msgs.msg import Pose
# initialize an empty pose message
pick_pose = Pose()
# TODO: fill in appropriate fields
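For example, a minimal sketch of filling in those fields from a centroid computed earlier (assuming the labels and centroids lists built above, and using np.asscalar() to recast to native Python floats) might look like this:

# Sketch only: look up the centroid that matches the current pick-list object
if object_name.data in labels:
    centroid = centroids[labels.index(object_name.data)]
    pick_pose.position.x = np.asscalar(centroid[0])
    pick_pose.position.y = np.asscalar(centroid[1])
    pick_pose.position.z = np.asscalar(centroid[2])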
Now the final argument to be passed in a pick and place request is place_pose. You can retrieve the place pose from dropbox.yaml. Just like the pick list .yaml file, dropbox.yaml is also loaded onto the parameter server via pick_place_project.launch by this line:
<rosparam command="load" file="$(find pr2_robot)/config/dropbox.yaml"/>
The dropbox.yaml file looks like this:
dropbox:
  - name: left
    group: red
    position: [0,0.71,0.605]
  - name: right
    group: green
    position: [0,-0.71,0.605]
The position parameters give you the (x, y, z) position of the center of the bottom of each drop box.
Similar to what you did to parse the pick list .yaml file, parse dropbox.yaml and retrieve the position parameters. Then, for each pick and place operation, populate the place_pose positions with those values.
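A minimal sketch of doing this (assuming the object_group variable parsed from the pick list earlier, and Pose imported from geometry_msgs.msg as shown above) might look like:

# Sketch only: map each dropbox group to its position from the parameter server
dropbox_param = rospy.get_param('/dropbox')
dropbox_positions = {box['group']: box['position'] for box in dropbox_param}

# Populate place_pose for the current object based on its group
place_pose = Pose()
place_pose.position.x = dropbox_positions[object_group][0]
place_pose.position.y = dropbox_positions[object_group][1]
place_pose.position.z = dropbox_positions[object_group][2]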
And that's it! You now have all the messages you need and you're ready to create a .yaml output file. The way it will actually work within your code is that you'll iterate over each item in the pick list and check whether you found it in your perception analysis; if you did, you'll populate the pick_pose message with its centroid. Since you'll do this with each object one at a time, a convenient way to save the messages for each object is in a list of dictionaries.
We've provided a make_yaml_dict() helper function to help you convert messages to dictionaries. It looks like this:
def make_yaml_dict(test_scene_num, arm_name, object_name, pick_pose, place_pose):
    yaml_dict = {}
    yaml_dict["test_scene_num"] = test_scene_num.data
    yaml_dict["arm_name"] = arm_name.data
    yaml_dict["object_name"] = object_name.data
    yaml_dict["pick_pose"] = message_converter.convert_ros_message_to_dictionary(pick_pose)
    yaml_dict["place_pose"] = message_converter.convert_ros_message_to_dictionary(place_pose)
    return yaml_dict
This function takes advantage of further helper code in the rospy_message_converter package to convert ROS messages to dictionary format. With each iteration over the pick list, you can create a dictionary with the above function and then build up a list of dictionaries containing all your ROS service request messages. So, for example, your for loop might look like this:
dict_list = []
for i in range(0, len(object_list_param)):
    # Populate various ROS messages
    yaml_dict = make_yaml_dict(test_scene_num, arm_name, object_name, pick_pose, place_pose)
    dict_list.append(yaml_dict)
After finishing the iterations, you can send your list of dictionaries to a .yaml file using the yaml Python package and another helper function we provide called send_to_yaml(), which looks like this:
def send_to_yaml(yaml_filename, dict_list):
    data_dict = {"object_list": dict_list}
    with open(yaml_filename, 'w') as outfile:
        yaml.dump(data_dict, outfile, default_flow_style=False)
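For example, after your loop completes, you might write the results out like this (the output filename here is just an illustration):

# Illustrative call: write all collected request dictionaries to a .yaml file
send_to_yaml('output_1.yaml', dict_list)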